25 research outputs found

    Implanting Life-Cycle Privacy Policies in a Context Database

    Ambient intelligence (AmI) environments continuously monitor surrounding individuals' context (e.g., location, activity, etc.) to make existing applications smarter, i.e., able to make decisions without requiring user interaction. Such AmI smartness is tightly coupled to the quantity and quality of the available (past and present) context. However, context is often linked to an individual (e.g., the location of a given person) and as such falls under privacy directives. The goal of this paper is to enable the difficult wedding of privacy (automatically fulfilling users' privacy wishes) and smartness in the AmI. Interestingly, privacy requirements in the AmI differ from those in traditional environments, where systems usually manage durable data (e.g., medical or banking information), collected and updated trustfully either by the donor herself, her doctor, or an employee of her bank. Therefore, proper information disclosure to third parties constitutes the major privacy concern in traditional studies

    Balancing smartness and privacy for the Ambient Intelligence

    Ambient Intelligence (AmI) will introduce large privacy risks. Stored context histories are vulnerable to unauthorized disclosure, so unlimited storage of privacy-sensitive context data is undesirable from the privacy viewpoint. However, high quality and quantity of data enable smartness for the AmI, while less and coarser data benefit privacy. This raises a very important problem for the AmI: how to balance the smartness and privacy requirements in an ambient world. In this article, we propose to give donors control over the life cycle of their context data, so that users themselves can balance their needs and wishes in terms of smartness and privacy

    Data Degradation: Making Private Data Less Sensitive Over Time

    Trail disclosure is the leakage of privacy-sensitive data, resulting from negligence, attack, or abusive scrutiny or usage of personal digital trails. To prevent trail disclosure, data degradation is proposed as an alternative to the limited retention principle. Data degradation is based on the assumption that long-lasting purposes can often be satisfied with a less accurate, and therefore less sensitive, version of the data. Data is progressively degraded such that it still serves application purposes while its accuracy, and thus its privacy sensitivity, decreases
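    The core assumption above — that a coarser version of a datum can still serve an application's purpose — can be sketched as a generalization hierarchy. The levels and field names below (GPS point → street → city → deleted) are hypothetical illustrations, not the paper's actual schema.

    ```python
    # Hypothetical generalization hierarchy for a location attribute:
    # each step yields a less accurate, and therefore less sensitive, version.
    LOCATION_LEVELS = ["gps", "street", "city", "none"]

    def degrade_location(value, level):
        """Return the version of `value` at the given accuracy level."""
        if level == "gps":
            return value                                  # full accuracy
        if level == "street":
            return {"street": value["street"], "city": value["city"]}
        if level == "city":
            return {"city": value["city"]}
        return None                                       # fully degraded: data disappears

    loc = {"lat": 52.2215, "lon": 6.8937,
           "street": "Drienerlolaan", "city": "Enschede"}
    print(degrade_location(loc, "city"))                  # {'city': 'Enschede'}
    ```

    Note that each step is irreversible by construction: the coarser version retains no information from which the finer one could be reconstructed.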

    The right expert at the right time and place: From expertise identification to expertise selection

    We propose a unified and complete solution for expert finding in organizations, covering not only expertise identification but also expertise selection. The latter includes the use of implicit and explicit preferences of users on meeting each other, as well as localization and planning as important auxiliary processes. We also propose a solution for privacy protection, which is urgently required in view of the huge amount of privacy-sensitive data involved. The various parts are elaborated elsewhere, and we look forward to a realization and usage of the proposed system as a whole

    The Life-Cycle Policy model

    Our daily life activity leaves digital trails in an increasing number of databases (commercial web sites, internet service providers, search engines, location tracking systems, etc.). Personal digital trails are commonly exposed to accidental disclosures resulting from negligence or piracy, and to ill-intentioned scrutiny and abusive usage fostered by fuzzy privacy policies. No one is sheltered, because a single event (e.g., applying for a job or a credit) can suddenly make our history a precious asset. By definition, access control fails to prevent trail disclosures, motivating the integration of the Limited Data Retention principle into legislation protecting data privacy. By this principle, data is withdrawn from a database after a predefined time period. However, this principle is difficult to apply in practice, leading to useless sensitive information being retained for years in databases. In this paper, we propose a simple and practical data degradation model where sensitive data undergoes a progressive and irreversible degradation from an accurate state at collection time, through intermediate but still informative degraded states, up to complete disappearance when the data becomes useless. The benefits of data degradation are twofold: (i) by reducing the amount of accurate data, the privacy offence resulting from a trail disclosure is drastically reduced, and (ii) degrading the data in line with the application purposes offers a new compromise between privacy preservation and application reach. We introduce in this paper a data degradation model, analyze its impact on core database techniques like storage, indexing and transaction management, and propose degradation-aware techniques
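    The life-cycle model described above — an accurate state at collection time, intermediate degraded states, and eventual disappearance — can be sketched as a policy mapping a datum's age to its required accuracy state. The state names and retention periods below are hypothetical examples, not values from the paper.

    ```python
    import datetime

    # Hypothetical life-cycle policy: (accuracy state, retention period).
    # When each period expires, the datum moves irreversibly to the next,
    # coarser state; after the last period it must be deleted.
    POLICY = [
        ("gps",    datetime.timedelta(days=1)),
        ("street", datetime.timedelta(days=30)),
        ("city",   datetime.timedelta(days=365)),
    ]

    def current_state(collected_at, now):
        """Return the accuracy state a datum should be in at time `now`,
        or None once it must have disappeared completely."""
        age = now - collected_at
        elapsed = datetime.timedelta(0)
        for state, period in POLICY:
            elapsed += period
            if age < elapsed:
                return state
        return None  # past the last state: complete disappearance

    t0 = datetime.datetime(2024, 1, 1)
    print(current_state(t0, t0 + datetime.timedelta(hours=6)))   # 'gps'
    print(current_state(t0, t0 + datetime.timedelta(days=10)))   # 'street'
    print(current_state(t0, t0 + datetime.timedelta(days=500)))  # None
    ```

    The degradation-aware storage, indexing and transaction techniques mentioned in the abstract would then have to enforce such a schedule physically, so that the accurate version is actually destroyed rather than merely hidden.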

    Life-cycle privacy policies for the ambient intelligence

    CTIT Symposium Poster Competition, 2nd prize - http://www.ctit.utwente.nl/newsflash/awards2006.doc

    Degrading Data, Defining Privacy
